Power Savings in Embedded Processors through Decode Filter Cache

Authors

  • Weiyu Tang
  • Rajesh Gupta
  • Alexandru Nicolau
Abstract

In embedded processors, instruction fetch and decode can consume more than 40% of processor power. An instruction filter cache can be placed between the CPU core and the instruction cache to service the instruction stream; power savings in instruction fetch result from accessing a small cache. In this paper, we introduce a decode filter cache to provide a decoded instruction stream. On a hit in the decode filter cache, fetching from the instruction cache and the subsequent decoding are eliminated, which results in power savings in both instruction fetch and instruction decode. We propose to classify instructions as cacheable or uncacheable depending on their decoded width. A sectored cache design is then used in the decode filter cache so that cacheable and uncacheable instructions can coexist in a decode filter cache sector. Finally, a prediction mechanism is presented to reduce the decode filter cache miss penalty. Experimental results show an average 34% processor power reduction and less than 1% performance degradation.
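
As a rough illustration of the mechanism the abstract describes, the behavioral sketch below models a direct-mapped, sectored decode filter cache in Python. It is a sketch under assumed parameters (sector size, number of sectors, 4-byte instructions, a fixed decoded-width threshold), not the authors' implementation.

# Illustrative behavioral model of a decode filter cache (not the authors'
# design). Assumptions: direct-mapped sectored organization, 4-byte
# instructions, one valid bit per instruction slot so cacheable and
# uncacheable instructions can share a sector, and a fixed decoded-width
# threshold deciding cacheability.

SECTOR_WORDS = 8         # instruction slots per sector (assumed)
NUM_SECTORS = 64         # sectors in the decode filter cache (assumed)
MAX_DECODED_BITS = 64    # cacheability threshold on decoded width (assumed)

class DecodeFilterCache:
    def __init__(self):
        self.tags = [None] * NUM_SECTORS
        self.valid = [[False] * SECTOR_WORDS for _ in range(NUM_SECTORS)]
        self.data = [[None] * SECTOR_WORDS for _ in range(NUM_SECTORS)]

    def _index(self, pc):
        word = (pc // 4) % SECTOR_WORDS
        sector = (pc // (4 * SECTOR_WORDS)) % NUM_SECTORS
        tag = pc // (4 * SECTOR_WORDS * NUM_SECTORS)
        return sector, word, tag

    def lookup(self, pc):
        """Return the decoded instruction on a hit, else None.
        A hit skips both the instruction-cache access and the decode stage."""
        sector, word, tag = self._index(pc)
        if self.tags[sector] == tag and self.valid[sector][word]:
            return self.data[sector][word]
        return None

    def fill(self, pc, decoded, decoded_bits):
        """Install a decoded instruction after a normal fetch and decode."""
        sector, word, tag = self._index(pc)
        if self.tags[sector] != tag:             # new sector: clear old slots
            self.tags[sector] = tag
            self.valid[sector] = [False] * SECTOR_WORDS
        if decoded_bits <= MAX_DECODED_BITS:     # cacheable: keep decoded form
            self.data[sector][word] = decoded
            self.valid[sector][word] = True
        # Uncacheable (too wide) instructions leave their slot invalid; later
        # references to them fall back to the instruction cache and decoder,
        # while cacheable neighbours still occupy the same sector.

On a lookup miss, a real front end would fetch from the instruction cache, decode, and call fill(); the prediction mechanism mentioned in the abstract would steer fetch to the instruction cache ahead of time to hide that miss penalty.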

Related Papers

Synthesis of Application-Specific Memories for Power Optimization in Embedded Systems

This paper presents a novel approach to memory power optimization for embedded systems based on the exploitation of data locality. Locations with the highest access frequency are mapped onto a small, low-power application-specific memory which is placed close to the processor. Although, in principle, a cache may be used to implement such a memory, more efficient solutions may be adopted. We propose an ar...
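
As a minimal sketch of the profiling idea above (hypothetical names, capacity, and trace, not the paper's synthesis flow), the Python fragment below counts access frequencies and maps the hottest locations onto a small application-specific memory.

# Hypothetical profiling step: pick the most frequently accessed locations
# for a small low-power application-specific memory; everything else stays
# in the larger memory.

from collections import Counter

def partition_addresses(access_trace, asm_words):
    """Return the addresses placed in the application-specific memory and
    the fraction of all accesses that memory would serve."""
    freq = Counter(access_trace)
    hottest = [addr for addr, _ in freq.most_common(asm_words)]
    covered = sum(freq[a] for a in hottest) / max(len(access_trace), 1)
    return set(hottest), covered

# Toy example: a 2-word application-specific memory over a short trace
trace = [0x10, 0x10, 0x14, 0x20, 0x10, 0x14, 0x30, 0x40, 0x10]
hot, fraction = partition_addresses(trace, 2)
print(sorted(hot), round(fraction, 2))    # [16, 20] 0.67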

Low-Power L2 Cache Architecture for Multiprocessor System on Chip Design

A significant portion of cache energy in a highly associative cache is consumed during tag comparison. In this paper, tag comparison is carried out by predicting both cache hits and cache misses using a multistep tag comparison method. A partially tagged Bloom filter is used for cache miss prediction by checking the non-membership of addresses, and a hotline check is used for cache hit prediction by reducing...
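
A minimal sketch of the miss-prediction half of that scheme is given below, assuming a plain Bloom filter over the block addresses currently resident in the cache; the hash choice and sizing are illustrative, evictions would need a counting variant, and the hotline (hit-prediction) check is omitted.

# Illustrative Bloom-filter miss predictor (sizing and hashing are assumed).
# A zero bit at any hashed position proves the block is not cached, so the
# controller can predict a miss without the full tag comparison.

import hashlib

class BloomMissPredictor:
    def __init__(self, bits=1024, num_hashes=3):
        self.bits = bits
        self.num_hashes = num_hashes
        self.bitmap = bytearray(bits)

    def _positions(self, block_addr):
        for i in range(self.num_hashes):
            h = hashlib.blake2b(f"{block_addr}:{i}".encode(), digest_size=4)
            yield int.from_bytes(h.digest(), "little") % self.bits

    def on_fill(self, block_addr):
        # Called when a block is installed in the cache.  Removing blocks on
        # eviction would require a counting Bloom filter, omitted here.
        for p in self._positions(block_addr):
            self.bitmap[p] = 1

    def predict_miss(self, block_addr):
        # Non-membership is exact; a False result may be a false positive,
        # in which case the normal tag comparison still decides the outcome.
        return any(self.bitmap[p] == 0 for p in self._positions(block_addr))

pred = BloomMissPredictor()
pred.on_fill(0x1A40)
print(pred.predict_miss(0x1A40))   # False: possibly cached, fall back to tags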

Hardware/software Techniques for Memory Power Optimizations in Embedded Processors

Power has become one of the primary design constraints in modern microprocessors. This is all the more true in the embedded domain where designers are being pushed to create faster processors that operate for long periods of time on a single battery. It is well known that the memory sub-system is responsible for a significant percentage of the overall power dissipation. For example, in the Stro...

Power Optimization in L1 Cache of Embedded Processors Using CBF Based TOB Architecture

In an embedded processor, the cache can consume 40% of the entire chip power, so reducing the cache's high power utilization is very important. To reduce this power utilization, a new cache method is used in embedded processors, termed the Early Tag Access (ETA) method. For memory instructions, the target way can be found early by ETA. Thereby it can reduce the high p...
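
To make that flow concrete, the sketch below is a behavioral illustration of the early-tag-access idea under assumed parameters (a small 4-way cache, a tag probe performed as soon as the address is known, and a data access that enables only the matching way); it is not the architecture proposed in the paper.

# Behavioral illustration of Early Tag Access (assumed parameters).  The tag
# arrays are probed early for a memory instruction; the later data-array
# access then enables only the matching way instead of every way.

NUM_SETS = 16
NUM_WAYS = 4
LINE_BYTES = 32

class EtaCacheModel:
    def __init__(self):
        # tags[set][way] holds the tag stored in that way, or None if empty
        self.tags = [[None] * NUM_WAYS for _ in range(NUM_SETS)]
        self.ways_activated = 0          # crude proxy for dynamic energy

    def split(self, addr):
        block = addr // LINE_BYTES
        return block % NUM_SETS, block // NUM_SETS    # (set index, tag)

    def early_tag_access(self, addr):
        """Performed early, once the address is available: return the
        target way on a hit, or None when no way matches."""
        index, tag = self.split(addr)
        for way in range(NUM_WAYS):
            if self.tags[index][way] == tag:
                return way
        return None

    def data_access(self, addr, target_way):
        """Later data-array access: enable one way instead of NUM_WAYS."""
        self.ways_activated += 1 if target_way is not None else NUM_WAYS
        return target_way

cache = EtaCacheModel()
index, tag = cache.split(0x400)
cache.tags[index][2] = tag               # pretend the line resides in way 2
way = cache.early_tag_access(0x400)
cache.data_access(0x400, way)
print(way, cache.ways_activated)         # 2 1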

Augmented FIFO Cache Replacement Policies for Low-Power Embedded Processors

This paper explores a family of augmented FIFO replacement policies for highly set-associative caches that are common in low-power embedded processors. In such processors, the implementation cost and complexity of the replacement policy are as important as the cache hit rate. By exploiting the cache hit way information between two replacements, the proposed replacement schemes reduce cache misses...
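
Since the abstract is truncated, the exact scheme is not spelled out here; the sketch below is only one plausible way to augment FIFO with hit-way information, where a way that has been hit since the last replacement is skipped once before it can become a victim.

# One plausible (assumed) FIFO augmentation: ways hit since the previous
# replacement get a single second chance, so the victim is the oldest way
# that has not served a recent hit.

NUM_WAYS = 8

class AugmentedFifoSet:
    def __init__(self):
        self.fifo_ptr = 0                         # next candidate in FIFO order
        self.hit_since_repl = [False] * NUM_WAYS  # hit-way info between replacements

    def record_hit(self, way):
        self.hit_since_repl[way] = True

    def choose_victim(self):
        # Walk in FIFO order; skip (and clear) ways hit since the last replacement.
        for _ in range(NUM_WAYS):
            way = self.fifo_ptr
            self.fifo_ptr = (self.fifo_ptr + 1) % NUM_WAYS
            if self.hit_since_repl[way]:
                self.hit_since_repl[way] = False          # one second chance
                continue
            self.hit_since_repl = [False] * NUM_WAYS      # start a new interval
            return way
        # Every way was recently hit: fall back to plain FIFO order.
        victim = self.fifo_ptr
        self.fifo_ptr = (self.fifo_ptr + 1) % NUM_WAYS
        return victim

s = AugmentedFifoSet()
s.record_hit(0)
print(s.choose_victim())   # 1: way 0 is skipped because it was hit recently

A scheme of this shape needs only one extra bit of state per way on top of the FIFO pointer, which keeps the implementation cost close to plain FIFO.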

Publication date: 2001